The capability to adapt compliance by varying muscle stiffness is crucial for dexterous manipulation skills in humans. Incorporating compliance in robot motor control is crucial for performing real-world force interaction tasks with human-level dexterity. This work presents a Deep Model Predictive Variable Impedance Controller for compliant robotic manipulation, which combines Variable Impedance Control with Model Predictive Control (MPC). A generalized Cartesian impedance model of a robot manipulator is learned using an exploration strategy that maximizes information gain. This model is used within an MPC framework to adapt the impedance parameters of a low-level variable impedance controller, achieving the desired compliance behavior for different manipulation tasks without any retraining or finetuning. The approach is evaluated using a Franka Emika Panda robotic manipulator operating on different manipulation tasks in simulated and real experimental setups. The proposed approach is compared against model-free and model-based reinforcement learning approaches to variable impedance control in terms of transferability between tasks and performance.
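As a rough illustration of how an MPC layer can adapt the gains of a lower-level impedance controller, the sketch below runs a random-shooting planner over candidate Cartesian stiffness values using a learned forward model. Here `learned_model` and `task_cost` are hypothetical placeholders for the learned Cartesian impedance model and the task-specific cost, not the authors' implementation.

```python
import numpy as np

# Minimal sketch (not the paper's implementation): sampling-based MPC that picks
# Cartesian stiffness gains for a low-level variable impedance controller using
# a learned forward model. `learned_model` and `task_cost` are hypothetical.

def plan_stiffness(state, learned_model, task_cost,
                   horizon=10, n_samples=64,
                   k_min=50.0, k_max=1000.0, seed=0):
    """Random-shooting MPC over per-axis stiffness gains K (damping set critically)."""
    rng = np.random.default_rng(seed)
    best_K, best_cost = None, np.inf
    for _ in range(n_samples):
        K = rng.uniform(k_min, k_max, size=3)    # candidate Cartesian stiffness (x, y, z)
        D = 2.0 * np.sqrt(K)                     # critically damped gains
        s, cost = state, 0.0
        for _ in range(horizon):
            s = learned_model(s, K, D)           # predicted next state under impedance (K, D)
            cost += task_cost(s, K)              # e.g. tracking error plus stiffness penalty
        if cost < best_cost:
            best_K, best_cost = K, cost
    return best_K                                # sent to the low-level impedance controller
```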
Classically, the development of humanoid robots has been sequential and iterative. Such bottom-up design procedures rely heavily on intuition and are often biased by the designer's experience. Exploiting the non-linear coupled design space of robots is non-trivial and requires a systematic procedure for exploration. We adopt the top-down design strategy, the V-model, used in automotive and aerospace industries. Our co-design approach identifies non-intuitive designs from within the design space and obtains the maximum permissible range of the design variables as a solution space, to physically realise the obtained design. We show that by constructing the solution space, one can (1) decompose higher-level requirements onto sub-system-level requirements with tolerance, alleviating the "chicken-or-egg" problem during the design process, (2) decouple the robot's morphology from its controller, enabling greater design flexibility, (3) obtain independent sub-system level requirements, reducing the development time by parallelising the development process.
Of late, insurance fraud detection has assumed immense significance owing to the huge financial and reputational losses that fraud entails and the phenomenal success of fraud detection techniques. Insurance is broadly divided into two categories: (i) life and (ii) non-life. Non-life insurance in turn includes health insurance and auto insurance, among other things. In either category, fraud detection techniques should be designed so that they capture as many fraudulent transactions as possible. Owing to the rarity of fraudulent transactions, in this paper we propose a chaotic variational autoencoder (C-VAE) to perform one-class classification (OCC) on genuine transactions. Here, we employ the logistic chaotic map to generate random noise in the latent space. The effectiveness of C-VAE is demonstrated on health insurance fraud and auto insurance datasets. We consider the vanilla variational autoencoder (VAE) as the baseline. It is observed that C-VAE outperforms VAE on both datasets, achieving classification rates of 77.9% and 87.25% on the health and automobile insurance datasets respectively. Further, a t-test conducted at the 1% level of significance with 18 degrees of freedom indicates that C-VAE is statistically significantly better than VAE.
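The core idea, replacing the Gaussian noise of the VAE reparameterization step with a logistic chaotic map, can be sketched as follows; the zero-centering of the chaotic sequence and the seed perturbation are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Sketch of chaotic noise injection in the latent space: the logistic map
# x_{t+1} = r * x_t * (1 - x_t) replaces the usual Gaussian epsilon.

def logistic_map_noise(shape, r=4.0, x0=0.7, seed_jitter=1e-6, seed=0):
    """Generate chaotic 'noise' via the logistic map, roughly centered at zero."""
    rng = np.random.default_rng(seed)
    n = int(np.prod(shape))
    x = np.empty(n)
    x[0] = x0 + seed_jitter * rng.standard_normal()   # small perturbation of the seed
    for t in range(1, n):
        x[t] = r * x[t - 1] * (1.0 - x[t - 1])
    return (x - 0.5).reshape(shape)                   # shift from [0, 1] toward zero mean

def reparameterize(mu, log_var):
    """z = mu + sigma * eps, with chaotic eps instead of Gaussian eps."""
    eps = logistic_map_noise(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps
```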
As we continue to find applications in which currently available noisy devices exhibit an advantage over their classical counterparts, the efficient use of quantum resources is highly desirable. The notion of quantum autoencoders was proposed as a way to compress quantum information and reduce resource requirements. Here, we present a strategy to design quantum autoencoders using evolutionary algorithms for transforming quantum information into lower-dimensional representations. We successfully demonstrate initial applications of the algorithm for compressing different families of quantum states. In particular, we point out that using a restricted gate set in the algorithm allows the generated circuits to be efficiently simulated. This approach opens the possibility of using classical logic to find low-dimensional representations of quantum data with fewer computational resources.
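A minimal sketch of the kind of evolutionary search this describes is given below, under heavy simplifications: a circuit is a flat list of gates from a restricted (Clifford-like) gate set, and `compression_fidelity` is a hypothetical user-supplied routine that classically simulates how well a candidate circuit compresses the target family of states.

```python
import random

# Conceptual sketch of an evolutionary search over quantum circuits for an
# autoencoder. Circuits are lists of (gate_name, qubit_indices) tuples; the
# restricted gate set and the fitness routine are assumptions for illustration.

GATE_SET = ["h", "s", "cx"]           # restricted, efficiently simulable gate set (assumption)

def random_gate(n_qubits, rng):
    name = rng.choice(GATE_SET)
    qubits = rng.sample(range(n_qubits), 2) if name == "cx" else [rng.randrange(n_qubits)]
    return (name, tuple(qubits))

def mutate(circuit, n_qubits, rng):
    child = list(circuit)
    if child and rng.random() < 0.5:
        child[rng.randrange(len(child))] = random_gate(n_qubits, rng)             # replace a gate
    else:
        child.insert(rng.randrange(len(child) + 1), random_gate(n_qubits, rng))   # insert a gate
    return child

def evolve_autoencoder(compression_fidelity, n_qubits=4, generations=200,
                       population=20, seed=0):
    """compression_fidelity: circuit -> score (hypothetical classical simulator)."""
    rng = random.Random(seed)
    pop = [[random_gate(n_qubits, rng) for _ in range(5)] for _ in range(population)]
    for _ in range(generations):
        parents = sorted(pop, key=compression_fidelity, reverse=True)[: population // 4]
        pop = parents + [mutate(rng.choice(parents), n_qubits, rng)
                         for _ in range(population - len(parents))]
    return max(pop, key=compression_fidelity)
```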
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically grows with scale in settings with ambiguous context, but this can be improved with prompting.
Data augmentation is an important component of robustness evaluation for natural language processing (NLP) models and of enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, datacards, and robustness analysis results are publicly available in the NL-Augmenter repository (https://github.com/GEM-benchmark/NL-Augmenter).
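The two building blocks of the framework, transformations and filters, can be illustrated with the self-contained sketch below. The class and method names here are assumptions for illustration, not the exact NL-Augmenter interface; see the repository for the real base classes.

```python
import random
from typing import List

# Illustrative stand-ins for the two concepts: a transformation modifies the
# data, a filter selects a data split by some property.

class AdjacentCharSwapTransformation:
    """Perturb a sentence by randomly swapping adjacent characters."""
    def __init__(self, prob: float = 0.05, seed: int = 0):
        self.prob, self.rng = prob, random.Random(seed)

    def generate(self, sentence: str) -> List[str]:
        chars = list(sentence)
        for i in range(len(chars) - 1):
            if self.rng.random() < self.prob:
                chars[i], chars[i + 1] = chars[i + 1], chars[i]
        return ["".join(chars)]

class LengthFilter:
    """Keep only examples whose length (in words) falls in a given range."""
    def __init__(self, min_words: int = 5, max_words: int = 40):
        self.min_words, self.max_words = min_words, max_words

    def filter(self, sentence: str) -> bool:
        return self.min_words <= len(sentence.split()) <= self.max_words
```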
Kernel-based deep learning using multi-layer kernel machines (MKMs) was proposed by Y. Cho and L. K. Saul \cite{saul}. In MKMs, only one kernel (the arc-cosine kernel) is used in each layer for kernel PCA-based feature extraction. We propose to use multiple kernels in each layer through a convex combination of many kernels, following an unsupervised learning strategy. An empirical study is conducted on the mnist-back-rand, mnist-back-image, and mnist-rot-image datasets, generated by adding random noise to the image backgrounds of the MNIST dataset. Experimental results indicate that multiple kernel learning (MKL) in MKMs yields a better representation of the raw data and improves classifier performance.
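One such layer can be sketched as follows: a convex combination of an arc-cosine kernel and an RBF kernel is fed to kernel PCA on the precomputed Gram matrix. The fixed mixture weights stand in for those chosen by the unsupervised strategy and are purely illustrative.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import rbf_kernel

# Sketch of one MKM layer with a convex combination of kernels feeding kernel
# PCA. Mixture weights `betas` are fixed here only for illustration.

def arc_cosine_kernel(X, Y):
    """Arc-cosine kernel of degree 1 (Cho & Saul)."""
    nx = np.linalg.norm(X, axis=1, keepdims=True)
    ny = np.linalg.norm(Y, axis=1, keepdims=True)
    cos = np.clip((X @ Y.T) / (nx * ny.T + 1e-12), -1.0, 1.0)
    theta = np.arccos(cos)
    return (nx * ny.T / np.pi) * (np.sin(theta) + (np.pi - theta) * cos)

def mkm_layer(X, betas=(0.5, 0.5), gamma=0.05, n_components=50):
    """One layer: convex kernel combination -> kernel PCA features."""
    K = betas[0] * arc_cosine_kernel(X, X) + betas[1] * rbf_kernel(X, gamma=gamma)
    kpca = KernelPCA(n_components=n_components, kernel="precomputed")
    return kpca.fit_transform(K)        # features passed to the next layer / classifier
```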
We present an end-to-end methodological framework for causal segment discovery that aims to uncover the differential impacts of treatments across user subgroups in large-scale digital experiments. Building on recent developments in causal inference and non/semi-parametric statistics, our approach unifies two objectives: (1) the discovery of user segments defined by subgroup-specific treatment effects of candidate treatments, and (2) the evaluation of the causal impact of dynamically assigning units to a study's treatment arms based on their predicted segment-specific benefit or harm. Our proposal is model-agnostic, capable of incorporating state-of-the-art machine learning algorithms into the estimation procedure, and applicable to both randomized A/B tests and quasi-experiments. An open-source R package implementation, Sherlock, is introduced.
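A conceptual Python sketch of the two objectives (not the Sherlock R package itself) is given below: a simple T-learner stands in for the non/semi-parametric estimators, and candidate segments are formed by the sign of the predicted effect.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Conceptual sketch: (1) estimate subgroup-specific treatment effects with a
# model-agnostic learner, (2) form candidate segments from predicted benefit.
# A T-learner is used here purely for illustration.

def discover_segments(X, treatment, outcome):
    """X: covariates; treatment: 0/1 assignment; outcome: observed response."""
    mu1 = GradientBoostingRegressor().fit(X[treatment == 1], outcome[treatment == 1])
    mu0 = GradientBoostingRegressor().fit(X[treatment == 0], outcome[treatment == 0])
    cate = mu1.predict(X) - mu0.predict(X)      # predicted unit-level effect
    segment = np.where(cate > 0, "benefit", "harm_or_null")
    return cate, segment                        # segments feed the dynamic-assignment evaluation
```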
Background: Accurate diagnosis of skull base tumors is essential for providing personalized surgical treatment strategies. Intraoperative diagnosis can be challenging due to tumor diversity and the lack of intraoperative pathology resources. Objective: To develop an independent and parallel intraoperative pathology workflow that can provide rapid and accurate skull base tumor diagnoses using label-free optical imaging and artificial intelligence. Methods: We used a fiber-laser-based, label-free, non-consumptive, high-resolution microscopy method (< 60 seconds per 1 x 1 mm²), called stimulated Raman histology (SRH), to image a consecutive multicenter cohort of skull base tumor patients. SRH images were then used to train a convolutional neural network (CNN) model using three representation learning strategies: cross-entropy, self-supervised contrastive learning, and supervised contrastive learning. Our trained CNN models were tested on a held-out multicenter SRH dataset. Results: SRH was able to image the diagnostic features of both benign and malignant skull base tumors. Of the three representation learning strategies, supervised contrastive learning most effectively learned the distinctive and diagnostic SRH image features of each skull base tumor type. On our multicenter test set, cross-entropy achieved an overall diagnostic accuracy of 91.5%, self-supervised contrastive learning 83.9%, and supervised contrastive learning 96.6%. Our trained models were able to identify tumor-normal margins and detect regions of microscopic tumor infiltration in whole SRH images. Conclusion: SRH with trained artificial intelligence models can provide rapid and accurate intraoperative analysis of skull base tumor specimens to inform surgical decision-making.
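For reference, a minimal sketch of a supervised contrastive loss of the kind found most effective here is shown below; a single view per image is assumed for brevity, and the CNN backbone and SRH-specific preprocessing are omitted.

```python
import torch
import torch.nn.functional as F

# Sketch of a supervised contrastive loss: embeddings of images sharing a
# tumor-type label are pulled together, all others pushed apart.

def supervised_contrastive_loss(embeddings, labels, temperature=0.07):
    """embeddings: (N, D) backbone outputs; labels: (N,) tumor-type labels."""
    z = F.normalize(embeddings, dim=1)
    sim = z @ z.t() / temperature                          # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask

    # log-softmax over all other samples for each anchor (self excluded)
    sim = sim.masked_fill(self_mask, float("-inf"))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

    # average log-probability over positives, skipping anchors with no positive
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0
    summed = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)
    return (-(summed[valid] / pos_counts[valid])).mean()
```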
We study the best-arm identification problem in multi-armed bandits with stochastic, potentially private rewards, when the goal is to identify the arm with the highest quantile at a fixed, prescribed level. First, we propose a (non-private) successive elimination algorithm for strictly optimal best-arm identification; we show that our algorithm is $\delta$-PAC and characterize its sample complexity. Further, we provide a lower bound on the expected number of pulls, showing that the proposed algorithm is essentially optimal up to logarithmic factors. Both upper and lower complexity bounds depend on a special definition of the associated suboptimality gap, designed in particular for the quantile bandit problem; as we show, when the gap approaches zero, best-arm identification is impossible. Second, motivated by applications where the rewards are private, we provide a differentially private successive elimination algorithm whose sample complexity is finite even for distributions with infinite support size, and we characterize its sample complexity. Our algorithms do not require prior knowledge of either the suboptimality gap or other statistical information related to the bandit problem at hand.
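A minimal sketch of (non-private) successive elimination on empirical quantiles, in the spirit of the algorithm described above, is given below. The confidence radius is a generic illustrative choice, not the paper's gap-dependent bound.

```python
import numpy as np

# Sketch: pull all active arms each round, compute confidence-adjusted
# empirical quantiles, and eliminate arms whose optimistic quantile falls
# below the best pessimistic quantile.

def quantile_successive_elimination(arms, tau=0.5, delta=0.05, max_rounds=2000):
    """arms: list of callables, each returning one stochastic reward per pull."""
    active = list(range(len(arms)))
    samples = [[] for _ in arms]
    for t in range(1, max_rounds + 1):
        for i in active:
            samples[i].append(arms[i]())
        radius = np.sqrt(np.log(4 * len(arms) * t * t / delta) / (2 * t))
        lo = [np.quantile(samples[i], max(tau - radius, 0.0)) for i in active]
        hi = [np.quantile(samples[i], min(tau + radius, 1.0)) for i in active]
        best_lo = max(lo)
        active = [i for i, h in zip(active, hi) if h >= best_lo]   # drop dominated arms
        if len(active) == 1:
            break
    return active[0]   # recommended arm (first surviving arm if the budget runs out)
```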